Classification with Python

In this notebook we practice the classification algorithms covered in this course.

We load a dataset using the pandas library, apply several classification algorithms, and use accuracy-based evaluation metrics to find the best model for this dataset.

Let's first load the required libraries:

In [1]:
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from matplotlib.ticker import NullFormatter
from sklearn import preprocessing
%matplotlib inline

About dataset

This dataset is about past loans. The loan_train.csv data set includes details of 346 customers whose loans are already paid off or defaulted. It includes the following fields:

Field           Description
Loan_status     Whether the loan is paid off or in collection
Principal       Basic principal loan amount at origination
Terms           Origination terms, which can be a weekly (7 days), biweekly, or monthly payoff schedule
Effective_date  The date the loan originated and took effect
Due_date        Since it is a one-time payoff schedule, each loan has a single due date
Age             Age of the applicant
Education       Education level of the applicant
Gender          Gender of the applicant

Let's download the dataset

In [2]:
#!wget -O loan_train.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/FinalModule_Coursera/data/loan_train.csv

Load Data From CSV File

In [3]:
df = pd.read_csv('loan_train.csv')
df.head()
Out[3]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender
0 0 0 PAIDOFF 1000 30 9/8/2016 10/7/2016 45 High School or Below male
1 2 2 PAIDOFF 1000 30 9/8/2016 10/7/2016 33 Bechalor female
2 3 3 PAIDOFF 1000 15 9/8/2016 9/22/2016 27 college male
3 4 4 PAIDOFF 1000 30 9/9/2016 10/8/2016 28 college female
4 6 6 PAIDOFF 1000 30 9/9/2016 10/8/2016 29 college male
In [4]:
df.shape
Out[4]:
(346, 10)

Convert to datetime objects

In [5]:
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
Out[5]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender
0 0 0 PAIDOFF 1000 30 2016-09-08 2016-10-07 45 High School or Below male
1 2 2 PAIDOFF 1000 30 2016-09-08 2016-10-07 33 Bechalor female
2 3 3 PAIDOFF 1000 15 2016-09-08 2016-09-22 27 college male
3 4 4 PAIDOFF 1000 30 2016-09-09 2016-10-08 28 college female
4 6 6 PAIDOFF 1000 30 2016-09-09 2016-10-08 29 college male

Data visualization and pre-processing

Let's see how many records of each class are in our data set:

In [6]:
df['loan_status'].value_counts()
Out[6]:
PAIDOFF       260
COLLECTION     86
Name: loan_status, dtype: int64

260 people have paid off the loan on time, while 86 have gone into collection.

Let's plot some columns to understand the data better:

In [7]:
# notice: installing seaborn might take a few minutes
#!conda install -c anaconda seaborn -y
In [8]:
import seaborn as sns

bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'Principal', bins=bins, ec="k")

g.axes[-1].legend()
plt.show()
In [9]:
bins = np.linspace(df.age.min(), df.age.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'age', bins=bins, ec="k")

g.axes[-1].legend()
plt.show()

Pre-processing: Feature selection/extraction

Let's look at the day of the week people get the loan

In [10]:
df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()

We see that people who get the loan toward the end of the week tend not to pay it off, so let's use feature binarization to create a weekend flag: 1 when dayofweek is greater than 3 (Friday through Sunday), 0 otherwise.

In [11]:
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3)  else 0)
df.head()
Out[11]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender dayofweek weekend
0 0 0 PAIDOFF 1000 30 2016-09-08 2016-10-07 45 High School or Below male 3 0
1 2 2 PAIDOFF 1000 30 2016-09-08 2016-10-07 33 Bechalor female 3 0
2 3 3 PAIDOFF 1000 15 2016-09-08 2016-09-22 27 college male 3 0
3 4 4 PAIDOFF 1000 30 2016-09-09 2016-10-08 28 college female 4 1
4 6 6 PAIDOFF 1000 30 2016-09-09 2016-10-08 29 college male 4 1
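
As an aside, the same weekend flag could be produced with scikit-learn's Binarizer, which maps values strictly greater than a threshold to 1 (a sketch equivalent to the lambda above, not used later in this notebook):

from sklearn.preprocessing import Binarizer

# threshold=3 reproduces the (x > 3) rule used above:
# Friday-Sunday (4, 5, 6) become 1; Monday-Thursday (0-3) become 0
weekend_flag = Binarizer(threshold=3).fit_transform(df[['dayofweek']])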

Convert Categorical features to numerical values

Let's look at gender:

In [12]:
df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
Out[12]:
Gender  loan_status
female  PAIDOFF        0.865385
        COLLECTION     0.134615
male    PAIDOFF        0.731293
        COLLECTION     0.268707
Name: loan_status, dtype: float64

86% of females pay off their loans, while only 73% of males do.

Let's convert male to 0 and female to 1:

In [13]:
df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
df.head()
Out[13]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender dayofweek weekend
0 0 0 PAIDOFF 1000 30 2016-09-08 2016-10-07 45 High School or Below 0 3 0
1 2 2 PAIDOFF 1000 30 2016-09-08 2016-10-07 33 Bechalor 1 3 0
2 3 3 PAIDOFF 1000 15 2016-09-08 2016-09-22 27 college 0 3 0
3 4 4 PAIDOFF 1000 30 2016-09-09 2016-10-08 28 college 1 4 1
4 6 6 PAIDOFF 1000 30 2016-09-09 2016-10-08 29 college 0 4 1

One Hot Encoding

How about education?

In [14]:
df.groupby(['education'])['loan_status'].value_counts(normalize=True)
Out[14]:
education             loan_status
Bechalor              PAIDOFF        0.750000
                      COLLECTION     0.250000
High School or Below  PAIDOFF        0.741722
                      COLLECTION     0.258278
Master or Above       COLLECTION     0.500000
                      PAIDOFF        0.500000
college               PAIDOFF        0.765101
                      COLLECTION     0.234899
Name: loan_status, dtype: float64

Features before One Hot Encoding

In [15]:
df[['Principal','terms','age','Gender','education']].head()
Out[15]:
Principal terms age Gender education
0 1000 30 45 0 High School or Below
1 1000 30 33 1 Bechalor
2 1000 15 27 0 college
3 1000 30 28 1 college
4 1000 30 29 0 college

Let's use the one-hot encoding technique to convert categorical variables to binary variables and append them to the feature DataFrame.

In [16]:
Feature = df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature,pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis = 1,inplace=True)
Feature.head()
Out[16]:
Principal terms age Gender weekend Bechalor High School or Below college
0 1000 30 45 0 0 0 1 0
1 1000 30 33 1 0 1 0 0
2 1000 15 27 0 0 0 0 1
3 1000 30 28 1 1 0 0 1
4 1000 30 29 0 1 0 0 1

Feature Selection

Let's define feature sets, X:

In [17]:
X = Feature
X[0:5]
Out[17]:
Principal terms age Gender weekend Bechalor High School or Below college
0 1000 30 45 0 0 0 1 0
1 1000 30 33 1 0 1 0 0
2 1000 15 27 0 0 0 0 1
3 1000 30 28 1 1 0 0 1
4 1000 30 29 0 1 0 0 1

What are our labels?

In [18]:
y = df['loan_status'].values
y[0:5]
Out[18]:
array(['PAIDOFF', 'PAIDOFF', 'PAIDOFF', 'PAIDOFF', 'PAIDOFF'],
      dtype=object)

Normalize Data

Data standardization gives the data zero mean and unit variance. (Technically, the scaler should be fit only on the training data, after the train/test split, to avoid leaking test-set statistics; a sketch of that variant follows the next code cell.)

In [19]:
X = preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]
Out[19]:
array([[ 0.51578458,  0.92071769,  2.33152555, -0.42056004, -1.20577805,
        -0.38170062,  1.13639374, -0.86968108],
       [ 0.51578458,  0.92071769,  0.34170148,  2.37778177, -1.20577805,
         2.61985426, -0.87997669, -0.86968108],
       [ 0.51578458, -0.95911111, -0.65321055, -0.42056004, -1.20577805,
        -0.38170062, -0.87997669,  1.14984679],
       [ 0.51578458,  0.92071769, -0.48739188,  2.37778177,  0.82934003,
        -0.38170062, -0.87997669,  1.14984679],
       [ 0.51578458,  0.92071769, -0.3215732 , -0.42056004,  0.82934003,
        -0.38170062, -0.87997669,  1.14984679]])
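
For simplicity, the scaler above was fit on the full feature matrix. A leakage-free variant, sketched below assuming the same 80/20 split used in the next section, fits the scaler on the training rows only (this sketch is not used by the cells that follow):

from sklearn.model_selection import train_test_split
from sklearn import preprocessing

# Split first, then fit the scaler on the training rows only, so no
# statistics from the test rows leak into the transformation
X_tr, X_te, y_tr, y_te = train_test_split(Feature, y, test_size=0.2, random_state=4)
scaler = preprocessing.StandardScaler().fit(X_tr)
X_tr = scaler.transform(X_tr)
X_te = scaler.transform(X_te)  # reuse the train-fitted scaler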

Classification

Now it is your turn: use the training set to build an accurate model, then use the test set to report the accuracy of the model. You should use the following algorithms:

  • K Nearest Neighbor (KNN)
  • Decision Tree
  • Support Vector Machine
  • Logistic Regression

Notice:

  • You can go back and change the pre-processing, feature selection, feature extraction, and so on, to build a better model.
  • You should use the scikit-learn, SciPy, or NumPy libraries to develop the classification algorithms.
  • You should include the code of the algorithm in the following cells.

K Nearest Neighbor (KNN)

Notice: You should find the best k to build the model with the best accuracy. Warning: You should not use loan_test.csv for finding the best k; however, you can split your loan_train.csv into train and test sets to find the best k.

In [20]:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print('Shape of X training set {}'.format(X_train.shape),'&',' Size of Y training set {}'.format(y_train.shape))
Shape of X training set (276, 8) &  Size of Y training set (276,)
In [21]:
Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
In [22]:
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
for n in range(1,Ks):    
    #Train Model and Predict  
    neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
    yhat = neigh.predict(X_test)
    mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)    
    std_acc[n-1] = np.std(yhat==y_test)/np.sqrt(yhat.shape[0])
print(mean_acc)
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1) 

# Refit and keep the KNN model with the best accuracy found above
for n in range(1,Ks):
    neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
    yhat = neigh.predict(X_test)
    if (metrics.accuracy_score(y_test, yhat) == mean_acc.max()):
        KNNmodel = neigh
        break
[0.67142857 0.65714286 0.71428571 0.68571429 0.75714286 0.71428571
 0.78571429 0.75714286 0.75714286]
The best accuracy was 0.7857142857142857 with k = 7
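
A more robust way to pick k is cross-validation on the training data, which averages accuracy over several folds instead of relying on a single split (a sketch using the X_train, y_train, and Ks defined above; not part of the outputs in this notebook):

from sklearn.model_selection import cross_val_score

# Mean 5-fold cross-validated accuracy for each candidate k, using the
# training split only; loan_test.csv is never touched here
cv_acc = [cross_val_score(KNeighborsClassifier(n_neighbors=k),
                          X_train, y_train, cv=5).mean()
          for k in range(1, Ks)]
print("Best k by 5-fold CV:", int(np.argmax(cv_acc)) + 1)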

Decision Tree

In [23]:
from sklearn import metrics
from sklearn.tree import DecisionTreeClassifier
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.6, random_state=2)
DSmodel = DecisionTreeClassifier(criterion="entropy", max_depth = 4)
DSmodel.fit(X_train,y_train)
predTree = DSmodel.predict(X_test)
In [24]:
print (predTree [0:5])
print (y_test [0:5])
['PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF']
['PAIDOFF' 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF']
In [25]:
print("DecisionTrees's Accuracy: ", metrics.accuracy_score(y_test, predTree))
DecisionTrees's Accuracy:  0.7259615384615384

Support Vector Machine

In [26]:
from sklearn import svm
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=4)
clf = svm.SVC(kernel='rbf', probability=True)
clf.fit(X_train, y_train) 
predsvm = clf.predict(X_test)
In [27]:
print (predsvm [0:5])
print (y_test [0:5])
['PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF']
['PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF']
In [28]:
print("Support Vector Machine's Accuracy: ", metrics.accuracy_score(y_test, predsvm))
Support Vector Machine's Accuracy:  0.75

Logistic Regression

In [29]:
from sklearn.linear_model import LogisticRegression
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=6)
LR = LogisticRegression(C=0.01, solver='liblinear')
LR.fit(X_train,y_train)
predlr = LR.predict(X_test)
In [30]:
print (predlr [0:5])
print (y_test [0:5])
['PAIDOFF' 'PAIDOFF' 'COLLECTION' 'COLLECTION' 'PAIDOFF']
['PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'COLLECTION' 'PAIDOFF']
In [31]:
print("Logistic Regression's Accuracy: ", metrics.accuracy_score(y_test, predlr))
Logistic Regression's Accuracy:  0.7884615384615384

Model Evaluation using Test set

In [32]:
from sklearn.metrics import jaccard_score
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss
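
For a given pos_label, jaccard_score is the size of the intersection over the size of the union of the predicted and actual positive sets, i.e. J = TP / (TP + FP + FN): it equals 1 for a perfect match and shrinks as predictions and labels diverge.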

First, download and load the test set:

In [33]:
#!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv

Load Test set for evaluation

In [34]:
test_df = pd.read_csv('loan_test.csv')
test_df.head()
Out[34]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender
0 1 1 PAIDOFF 1000 30 9/8/2016 10/7/2016 50 Bechalor female
1 5 5 PAIDOFF 300 7 9/9/2016 9/15/2016 35 Master or Above male
2 21 21 PAIDOFF 1000 30 9/10/2016 10/9/2016 43 High School or Below female
3 24 24 PAIDOFF 1000 30 9/10/2016 10/9/2016 26 college male
4 35 35 PAIDOFF 800 15 9/11/2016 9/25/2016 29 Bechalor male
In [35]:
Models = {'KNN':KNNmodel, 'DecisionTree':DSmodel, 'SVM':clf, 'LogisticRegression':LR}
# Apply the same pre-processing steps used on the training data
test_df['due_date'] = pd.to_datetime(test_df['due_date'])
test_df['effective_date'] = pd.to_datetime(test_df['effective_date'])
test_df['dayofweek'] = test_df['effective_date'].dt.dayofweek
test_df['weekend'] = test_df['dayofweek'].apply(lambda x: 1 if (x>3)  else 0)
test_df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
test_Feature = test_df[['Principal','terms','age','Gender','weekend']]
test_Feature = pd.concat([test_Feature,pd.get_dummies(test_df['education'])], axis=1)
test_Feature.drop(['Master or Above'], axis = 1, inplace=True)
test_Feature.fillna(test_Feature.mean(), inplace=True)
# Note: fitting a fresh scaler on the test set is a simplification; ideally the
# scaler fitted on the training data would be reused here
X_test = preprocessing.StandardScaler().fit(test_Feature).transform(test_Feature)
y_test = test_df['loan_status'].values
test_df['loan_status'].value_counts()
Out[35]:
PAIDOFF       40
COLLECTION    14
Name: loan_status, dtype: int64
In [36]:
Jaccards = []
for algorithm, model in Models.items():
    # Predict once per model and score each class as the positive label
    yhat = model.predict(X_test)
    Jaccards.append([
        jaccard_score(y_test, yhat, pos_label='COLLECTION'),
        jaccard_score(y_test, yhat, pos_label='PAIDOFF'),
    ])
JaccardDict = dict(zip(list(Models.keys()), Jaccards))
print("Jaccard:")
pd.DataFrame(JaccardDict).transpose().iloc[:,[1]]
Jaccard:
Out[36]:
1
KNN 0.653846
DecisionTree 0.764706
SVM 0.740741
LogisticRegression 0.784314
In [37]:
# F1-score
F1_scores = []
for algorithm, model in Models.items():
    yhat = model.predict(X_test)
    # average=None returns one F1 score per class (COLLECTION, PAIDOFF)
    F1_scores.append(f1_score(y_test, yhat, average=None))
F1_scoresDict = dict(zip(list(Models.keys()), F1_scores))
print("F1-score:")
pd.DataFrame(F1_scoresDict).transpose().iloc[:,[1]]
F1-score:
Out[37]:
1
KNN 0.790698
DecisionTree 0.866667
SVM 0.851064
LogisticRegression 0.879121
In [38]:
# LogLoss (needs predicted probabilities; reported here for Logistic Regression)
predlr_probas = LR.predict_proba(X_test)
log_loss(y_test, predlr_probas)
Out[38]:
0.5772635746354248
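
For reference, log loss for binary labels y ∈ {0, 1} and predicted probabilities p is -(1/N) Σ [y·log(p) + (1-y)·log(1-p)]; lower is better, with 0 for perfect probabilistic predictions. Following the report template below, it is computed for the Logistic Regression model only.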

Report

Report of the accuracy of the built models using the different evaluation metrics (values taken from the cells above; Jaccard and F1-score are reported for the PAIDOFF class):

Algorithm            Jaccard    F1-score   LogLoss
KNN                  0.653846   0.790698   NA
Decision Tree        0.764706   0.866667   NA
SVM                  0.740741   0.851064   NA
LogisticRegression   0.784314   0.879121   0.577264
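
The table above can also be assembled programmatically, as in this small sketch (assuming JaccardDict, F1_scoresDict, y_test, and predlr_probas from the evaluation cells; column index 1 corresponds to the PAIDOFF class):

# Build a summary DataFrame indexed by algorithm name
report = pd.DataFrame({
    'Jaccard': {k: v[1] for k, v in JaccardDict.items()},
    'F1-score': {k: v[1] for k, v in F1_scoresDict.items()},
})
report['LogLoss'] = 'NA'
report.loc['LogisticRegression', 'LogLoss'] = log_loss(y_test, predlr_probas)
print(report)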

Want to learn more?

IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course: SPSS Modeler

Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at Watson Studio

Thanks for completing this lesson!

Author: Saeed Aghabozorgi

Saeed Aghabozorgi, PhD, is a Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge. He is a researcher in the data mining field and an expert in developing advanced analytic methods, such as machine learning and statistical modelling, on large datasets.


Change Log

Date (YYYY-MM-DD)  Version  Changed By     Change Description
2020-10-27         2.1      Lakshmi Holla  Changed the import statements due to updates in the sklearn library version
2020-08-27         2.0      Malika Singla  Added lab to GitLab

© IBM Corporation 2020. All rights reserved.